
Journal of Computational Neuroscience

Springer Science and Business Media LLC

Preprints posted in the last 90 days, ranked by how well they match Journal of Computational Neuroscience's content profile, based on 23 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.

1
Phase resetting of in-phase synchronized Hodgkin-Huxley dynamics under voltage perturbation reveals reduced null space

Gupta, R.; Karmeshu; Singh, R. K. B.

2026-03-24 neuroscience 10.64898/2026.03.21.713085 medRxiv
Top 0.1%
17.1%

Voltage perturbations to a repetitively firing Hodgkin-Huxley (HH) model of neuronal spiking in the bistable regime, with a coexisting limit cycle and stable steady node, can either lead to phase resetting of the spikes or to collapse to the stable steady state. The latter describes a non-firing, hyperpolarized quiescent state of the neuron despite the presence of a constant external current. Using the asymptotic phase response curve (PRC), the impact of voltage perturbations on a repetitively firing HH model is studied here while it is diffusively coupled to another HH model under identical external stimulation. It is observed that the pre-perturbation state of synchronization and the coupling strength critically determine the PRC response of the perturbed HH dynamics. Higher coupling strengths of perfectly in-phase (anti-phase) synchronized HH models shrink (expand) the combinatorial space of perturbation strengths and oscillation phases causing collapse to the quiescent state. This indicates a reduced (enlarged) basin of attraction, viz. the null space, associated with the steady state in the HH phase space. The findings have important implications for the spiking dynamics of diverse interneurons, as well as special cases of pyramidal neurons, coupled through electrical synapses via gap junctions, and suggest a role for gap-junction plasticity in tuning vulnerability to the quiescent state in the presence of biological noise and spikelets.
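The phase-resetting measurement described above can be sketched numerically. The following is a minimal illustration, not the authors' code: it integrates a single standard Hodgkin-Huxley model (textbook squid-axon parameters, not the bistable regime or the coupled configuration the preprint studies), applies an instantaneous voltage perturbation at a chosen phase of one cycle, and reads off the resulting spike-time shift as one point of the PRC. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (squid axon; currents in uA/cm^2)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext, T, dt=0.01, kick_t=None, kick_dV=0.0):
    """Euler-integrate HH; optionally add an instantaneous voltage jump
    (the perturbation) at time kick_t. Returns spike times, detected as
    upward crossings of 0 mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = [], False
    for i in range(int(T / dt)):
        t = i * dt
        if kick_t is not None and abs(t - kick_t) < dt / 2:
            V += kick_dV                      # voltage perturbation
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_ext - I_ion) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        if V > 0 and not above:
            spikes.append(t); above = True
        elif V < 0:
            above = False
    return np.array(spikes)

# Baseline period of the repetitively firing cell
sp = simulate(I_ext=10.0, T=300.0)
T0 = np.mean(np.diff(sp[5:]))                 # discard initial transient

# Phase response: perturb within one late cycle and measure the shift of
# the following spike relative to the unperturbed train.
t_ref = sp[10]
phase = 0.3                                    # fraction of the cycle
sp_pert = simulate(I_ext=10.0, T=300.0,
                   kick_t=t_ref + phase * T0, kick_dV=5.0)
next_unpert = sp[sp > t_ref + phase * T0][0]
next_pert = sp_pert[sp_pert > t_ref + phase * T0][0]
prc_value = (next_unpert - next_pert) / T0     # >0 means phase advance
print(f"period = {T0:.2f} ms, PRC({phase:.1f}) = {prc_value:+.3f}")
```

Sweeping `phase` over [0, 1) and `kick_dV` over a range of amplitudes would trace out the full perturbation-strength-by-phase map that the preprint analyzes for collapse to the quiescent state.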

2
Functional distinction between ionic and electric ephaptic effects on neuronal firing dynamics

Hauge, E.; Saetra, M. J.; Einevoll, G.; Halnes, G.

2026-03-30 neuroscience 10.64898/2026.03.26.714388 medRxiv
Top 0.1%
10.5%

Neuronal activity alters extracellular ion concentrations and electric potentials. Ephaptic effects refer to the feedback influence that these extracellular changes can have on neuronal activity. While electric ephaptic effects occur on a fast timescale due to extracellular potential perturbations, ionic ephaptic effects are driven by slower, accumulative changes in ion concentrations. Among the previous computational studies of ephaptic effects, the vast majority have focused exclusively on electric effects, while ionic ephaptic effects have largely been neglected. In this work, we present an electrodiffusive computational framework consisting of two-compartment neurons that interact via a shared extracellular space. By accounting for both electric potentials and ion-concentration dynamics in a self-consistent manner, our framework enables us to explore the relative roles of electric and ionic ephaptic effects. Through numerical experiments, we demonstrate that ionic and electric ephaptic interactions play very different roles. While ionic ephaptic interactions increase population firing rates, electric ephaptic interactions primarily drive subtle shifts in spike timing. Furthermore, we show that these spike shifts cause the phase difference (the distance in spike times between a small collection of neurons) to converge to a stable, unique phase difference, which we coin the ephaptic intrinsic phase preference.

Author summary: Neurons predominantly communicate through synapses: specialized contact points where a brief electrical signal, known as a spike or action potential, in one neuron influences another. Neurons generate these spikes by exchanging ions with the surrounding extracellular space. This way, spiking neurons alter extracellular ion concentrations and electric potentials. Since neurons are sensitive to such changes in their environment, they can also influence one another indirectly through the shared extracellular medium. 
This form of non-synaptic interaction is known as ephaptic coupling. Most computational models of neuronal activity neglect ephaptic interactions, and those that include them typically consider only electric effects while ignoring ionic contributions. As a result, the relative roles of electric and ionic ephaptic effects remain poorly understood. Here, we introduce a computational framework that accounts for both mechanisms in a self-consistent way. Our results show a functional distinction: ionic ephaptic effects act slowly, regulating population firing rates, whereas electric ephaptic effects act on millisecond timescales and subtly shift spike timing. These shifts cause spike-time differences between neurons to converge to a stable value, a phenomenon we call ephaptic intrinsic phase preference.

3
A local inhibitory plasticity rule for control of neuronal firing rate and supralinear dendritic integration

Trpevski, D.; Hellgren Kotaleski, J.; Hennig, M.

2026-01-21 neuroscience 10.64898/2026.01.20.700499 medRxiv
Top 0.1%
8.6%

Inhibitory synapses can control a neuron's firing rate and also control supralinear dendritic integration. It is not known how inhibitory synapses can learn to perform these functions using only signals available locally at the synaptic site. We study an inhibitory plasticity rule based on the Bienenstock-Cooper-Munro theory in multicompartment models of striatal projection neurons, and show that it can perform these two functions. The rule uses the local voltage-gated calcium concentration in the dendrites to regulate inhibitory synaptic strength. We show that, for rate-coded inputs, the rule can achieve precise control of neuronal firing rate after changes in excitatory input rate or excitatory synaptic strength. Additionally, for sparsely-coded inputs that activate localized synaptic clusters in dendrites, the rule can either allow or inhibit the supralinear dendritic response evoked by the clustered excitatory synapses, or equalize the dendritic responses arising from different clusters. Finally, we demonstrate the use of learning to inhibit supralinear dendritic integration for solving the nonlinear feature binding problem (NFBP), in tandem with a simple excitatory plasticity rule. We conclude by discussing why the collateral inhibitory synapses between striatal projection neurons could contribute to solving the NFBP with this plasticity rule.

Author summary: Neurons are the main cells in the nervous system that process information. They receive signals from the body's senses--both external and internal--and use them to guide actions such as muscle movement and the regulation of bodily functions. A neuron becomes active when incoming signals excite it strongly enough. But for neurons to operate in a timely, precise, and reliable manner, their activity needs to be shaped, modified and controlled. This is done by inhibition, which comes from specialized inhibitory neurons. In this article we study how inhibition can learn to perform two of its most basic roles in the nervous system. 
The first is to help neurons stay responsive across a wide range of input strengths--from very weak to very strong stimulation. For example, neurons in the retina allow vision both in dim starlight and in bright sunlight, even though these conditions differ in brightness by a trillion-fold. Inhibition contributes to handling this huge range by preventing overstimulation of the neurons in bright light. The second role of inhibition is to control strong, local excitations that occur on specific dendritic branches of a neuron. These local excitations can suddenly push a neuron into activity, and inhibition controls whether such excitations are allowed or suppressed. We use a learning mechanism that is already known to exist for excitatory synapses, but here we apply it to inhibition to explore what it could achieve. The results show that if inhibitory synapses used this same learning rule, they could support the two fundamental roles of inhibition in the nervous system described above.

4
How to train your neuron: Developing a detailed, up-to-date, multipurpose model of hippocampal CA1 pyramidal cells

Tar, L.; Saray, S.; Mohacsi, M.; Freund, T. F.; Kali, S.

2026-03-20 neuroscience 10.64898/2026.03.19.712861 medRxiv
Top 0.1%
7.2%

Anatomically and biophysically detailed models of neurons have been widely used to study information processing in these cells. Most studies focused on understanding specific phenomena, while more general models that aim to capture various cellular processes simultaneously remain rare, even though such models are required to predict neuronal behavior under more complex, natural conditions. In this study, we aimed to develop a detailed, data-driven, general-purpose biophysical model of hippocampal CA1 pyramidal neurons. We leveraged extensive morphological, biophysical and physiological data available for this cell type, and established a systematic workflow for model construction and validation that relies on our recently developed software tools. The model is based on a high-quality morphological reconstruction and includes a diverse curated set of ion channel models. After incorporating the available constraints on the distribution of ion channels, the remaining free parameters were optimized using the Neuroptimus tool to fit a variety of electrophysiological features extracted from somatic whole-cell recordings. Validation using HippoUnit confirmed the model's ability to replicate key electrophysiological features, including somatic voltage responses to current input, the attenuation of synaptic potentials and backpropagating action potentials, and nonlinear synaptic integration in oblique dendrites. Our model also included active dendritic spines, modeled either explicitly or by merging their biophysical mechanisms into those of the parent dendrite. We found that many aspects of neuronal behavior were unaffected by the level of detail in modeling spines, but modeling nonlinear synaptic integration accurately required the explicit modeling of spines. 
Our data-driven model of CA1 pyramidal cells matching diverse experimental constraints is a general tool for the investigation of the activity and plasticity of these cells and can also be a reliable component of detailed models of the hippocampal network. Our systematic approach to building and validating general-purpose models should apply to other cell types as well.

Author summary: The brain processes information through the activity of billions of individual neurons. To understand how these cells work, scientists build detailed computer models that reproduce their electrical behavior. These models make it possible to explore situations that are difficult or impossible to test experimentally. However, many existing neuron models were designed to explain only a few specific phenomena, which limits their usefulness in more complex settings. In this study, we developed a comprehensive computer model of a hippocampal CA1 pyramidal neuron, a cell type that plays a central role in learning and memory. We built the model using extensive experimental data and applied automated methods to ensure that it reproduces a broad range of observed neuronal behaviors. We also examined how small structures called dendritic spines--tiny protrusions where most synaptic communication occurs--affect how neurons combine incoming signals. We found that even simplified models without individual spines can capture many aspects of neuronal activity, but understanding more complex forms of signal integration requires modeling spines explicitly. Our work also supports the development of more realistic simulations of brain circuits.

5
Functionally convergent but parametrically distinct solutions: Robust degeneracy in a population of computational models of early-birth rat CA1 pyramidal neurons

Tomko, M.; Lupascu, C. A.; Filipova, A.; Jedlicka, P.; Lacinova, L.; Migliore, M.

2026-04-01 neuroscience 10.64898/2026.03.30.715207 medRxiv
Top 0.1%
6.9%

Background: Flexibility and robustness of neuronal function are closely linked to degeneracy, the ability of distinct structural or parametric configurations to produce similar functional outcomes. At the cellular level, this often manifests as ion-channel degeneracy, in which multiple combinations of intrinsic conductances yield comparable electrophysiological phenotypes.

Methodology: We used a population-based, data-driven modelling framework to generate large ensembles of biophysically detailed CA1 pyramidal neuron models constrained by somatic electrophysiological features extracted from patch-clamp recordings in acute slices from early-birth rats. Ten reconstructed morphologies were incorporated, and model populations were analyzed using parameter correlation analysis, principal component analysis, and generalization tests to assess robustness, degeneracy, and morphology dependence of intrinsic properties.

Conclusions: Across the model population, similar somatic firing behaviours emerged from widely different combinations of intrinsic parameters, demonstrating robust two-level ion-channel degeneracy both within and across morphologies. Each morphology occupied a distinct region of parameter space, indicating morphology-specific compensatory effects, while weak pairwise parameter correlations suggested distributed compensation rather than tight parameter dependencies. Even with a fixed morphology, multiple parameter subspaces supported comparable electrophysiological phenotypes. Generalization across morphologies was structure-dependent and non-reciprocal, with successful parameter transfer occurring preferentially between structurally similar neurons. Interestingly, to accurately simulate spike-frequency adaptation, it was important to retain some kinetic properties of the ion channel models as free parameters during optimization. 
Together, these findings show that dendritic morphology shapes the valid parameter space, and that the similar electrophysiology of CA1 pyramidal neurons arises from the interplay between structural variability and ion-channel diversity. This work highlights the importance of population-based modelling for capturing biological variability, provides insights into how neuronal robustness might be maintained despite substantial heterogeneity, and offers a scalable pipeline for generating biophysically realistic CA1 neuron populations for use in network simulations.

Author summary: Neurons must reliably process information even though their internal components, such as ion channels and cellular shape, can vary widely from cell to cell. How stable behaviour emerges from such variability is a fundamental question in neuroscience. In this study, we explored this problem using detailed computer models of early-birth rat hippocampal CA1 pyramidal neurons, a cell type that plays a central role in learning and memory. Instead of building a single "average" neuron model, we created large populations of models that all reproduced key experimental recordings but differed in their internal parameters. We found that neurons with different shapes and different combinations of ion channels could nevertheless generate similar electrical activity. This phenomenon, known as ion channel degeneracy, allows neurons to remain functional despite biological variability or perturbations. Our results show that neuronal shape strongly influences which parameter combinations are viable, but that multiple solutions exist even for the same morphology. The population of models we provide offers a resource for future studies of early-birth CA1 pyramidal cell function and dysfunction.

6
Simplified model of intrinsically bursting neurons

Bhattasali, N.; Pinto, L.; Lindsay, G. W.

2026-03-05 neuroscience 10.64898/2026.03.03.709454 medRxiv
Top 0.1%
6.2%

Rhythmic neural activity underlies essential biological functions such as locomotion, breathing, and feeding. Computational models are widely used to study how such rhythms emerge from interactions between neuron-level and circuit-level dynamics. Intrinsically bursting neurons are key components of many central pattern generators (CPGs), yet existing models span a tradeoff between biological realism and practical usability. Biophysical models involve many parameters that are difficult to tune, whereas abstract models often integrate poorly into neural circuit simulations. We propose a simplified model of intrinsically bursting neurons derived from a reduced non-spiking biophysical formulation. The model integrates readily into neural circuits while enabling direct and independent control of bursting characteristics, including duration, amplitude, and shape. We show that the model reproduces single-unit biophysical responses to diverse stimuli as well as circuit-level activity patterns from crustacean and mammalian CPGs. This model provides a practical tool for studying rhythm generation in neural circuits.

7
Postsynaptic integration of excitatory and inhibitory signals based on an adaptive firing threshold

Gambrell, O.; Singh, A.

2026-03-26 neuroscience 10.64898/2026.03.26.714497 medRxiv
Top 0.1%
4.9%

A key component of interneuronal communication is the modulation of postsynaptic firing frequencies by stochastic transmitter release from presynaptic neurons. The time interval between successive postsynaptic firings is called the inter-spike interval (ISI), and understanding its statistics is integral to neural information processing. We start with a model of an excitatory chemical synapse in which postsynaptic firing is governed by a classical integrate-and-fire model. Using a first-passage time framework, we derive exact analytical results for the ISI statistical moments, revealing parameter regimes driving precision in postsynaptic action potential timing. Next, we extend this analysis to include both an excitatory and an inhibitory presynaptic connection onto the same postsynaptic neuron. We consider both a fixed postsynaptic firing threshold and a threshold that adapts based on the postsynaptic membrane potential history. Our analysis shows that the latter adaptive threshold can result in scenarios where increasing the inhibitory input frequency increases the postsynaptic firing frequency. Moreover, we characterize parameter regimes where ISI noise is hypo-exponential or hyper-exponential, based on its coefficient of variation being less than or greater than one, respectively.
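To make the first-passage view of the ISI concrete, here is a hedged sketch that is deliberately simpler than the preprint's model: a non-leaky integrate-and-fire neuron driven only by excitatory Poisson inputs, for which the ISI is a sum of exponential inter-arrival times (an Erlang distribution) and the coefficient of variation falls below one, i.e. the hypo-exponential regime. The parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def isi_samples(rate_e, jump, theta, n_isi):
    """Monte Carlo ISIs of a non-leaky integrate-and-fire neuron driven by
    Poisson excitatory inputs: each arrival adds `jump` to the membrane
    potential; a spike fires (and the potential resets to 0) at `theta`."""
    n_jumps = int(np.ceil(theta / jump))          # arrivals needed per spike
    # Each ISI is the sum of n_jumps exponential inter-arrival times.
    waits = rng.exponential(1.0 / rate_e, size=(n_isi, n_jumps))
    return waits.sum(axis=1)

rate_e, jump, theta = 100.0, 1.0, 20.0            # hypothetical parameters
isis = isi_samples(rate_e, jump, theta, n_isi=200_000)

n_jumps = int(np.ceil(theta / jump))
mean_pred = n_jumps / rate_e                      # Erlang mean
cv_pred = 1.0 / np.sqrt(n_jumps)                  # Erlang CV < 1: hypo-exponential
cv_emp = isis.std() / isis.mean()
print(f"mean ISI: {isis.mean():.4f} (pred {mean_pred:.4f}), "
      f"CV: {cv_emp:.3f} (pred {cv_pred:.3f})")
```

Adding a leak or an inhibitory Poisson stream breaks the Erlang form and is where the preprint's exact first-passage moment calculations come in.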

8
Population decoding of sound source location by receptive field neurons in the mouse superior colliculus

Mullen, B. R.; Litke, A. M.; Feldheim, D. A.

2026-01-27 neuroscience 10.64898/2026.01.26.701861 medRxiv
Top 0.1%
3.6%

Identifying the location of a sound source in a complex environment and assessing its importance can be crucial for survival. The superior colliculus (SC), a midbrain structure involved in sensorimotor functions, contributes to sound localization and contains auditory responsive neurons that have spatially restricted receptive fields (RFs) organized into a topographic map along the azimuth. However, individual auditory SC neurons have large spatial RFs, are noisy, and do not respond to the same stimulus on each trial. Therefore, when an animal is presented with a "single trial" sound and needs to rely on a single neuron to locate the sound source direction, the location measurement may be erroneous, missing, or have poor spatial resolution. A more reliable and accurate determination of the sound source location is expected to come from a population of neurons. We therefore built a population-pattern Maximum Likelihood Estimation (MLE) decoder that can accurately predict the location of a stimulus given the population response. We compared three models that weight neurons by strict firing rate (FR), equally (EW), or by mutual information (MIW), and show that the MIW model works best, needing only 92 neurons to localize a stimulus with behaviorally relevant precision. Furthermore, by comparing models fitted using the responses from non-RF and RF auditory neurons, we show that only RF neurons contain the information needed to localize a sound source. These results are consistent with the hypothesis that the SC uses a population of RF neurons to determine sound source location.

Author summary: Being able to tell where a sound is coming from and how important it is can be critical for survival. The superior colliculus, a midbrain region involved in orienting behaviors, contains neurons that respond best to sounds coming from specific locations. 
This suggests that the combined activity of many neurons in the SC is used to determine sound location from a single sound event. To test this idea, we modeled responses from mouse SC neurons while sounds were played from different positions in space, both along the elevation and the horizon. A model that weighted the most informative neurons performed best in both directions, needing only 92 neurons to localize a stimulus with behaviorally relevant precision along the azimuth. Comparing models fitted using the responses from non-RF and RF auditory neurons, we show that only RF neurons contain the information needed to localize a sound source. Overall, our findings show that the SC can accurately locate sounds in both horizontal and vertical space using a population-based strategy, providing a simple and effective solution for rapid sound localization.
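A population MLE decoder of the kind described can be sketched in a few lines, assuming Poisson spike counts and Gaussian spatial tuning curves; these modeling choices and all parameter values are illustrative assumptions, not taken from the preprint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: Gaussian spatial receptive fields over azimuth
azimuths = np.linspace(-90, 90, 37)               # candidate locations (deg)
n_neurons = 92
centers = rng.uniform(-90, 90, n_neurons)         # RF centers
width, r_max, r_base = 25.0, 20.0, 1.0            # assumed tuning parameters

def rates(loc):
    """Expected spike count of each neuron for a source at `loc`."""
    return r_base + r_max * np.exp(-0.5 * ((loc - centers) / width) ** 2)

def decode_mle(counts):
    """Poisson maximum-likelihood decoder: pick the azimuth whose expected
    population response best explains the observed spike counts."""
    ll = [np.sum(counts * np.log(rates(a)) - rates(a)) for a in azimuths]
    return azimuths[int(np.argmax(ll))]

# Single-trial decoding over many simulated trials
errors = []
for _ in range(500):
    true_loc = rng.choice(azimuths)
    counts = rng.poisson(rates(true_loc))         # noisy single-trial response
    errors.append(abs(decode_mle(counts) - true_loc))
print(f"median absolute error: {np.median(errors):.1f} deg")
```

Even though each simulated neuron is noisy and broadly tuned, the pooled likelihood keeps the median single-trial error within a grid step or two, illustrating why a population estimate outperforms any single neuron.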

9
A Population Coupling Model Identifies Reduced Propagation from V1 to Higher Visual Areas During Locomotion

Xin, Q.; Urban, K. N.; Siegle, J. H.; Kass, R. E.

2026-02-06 neuroscience 10.64898/2026.02.04.703681 medRxiv
Top 0.1%
3.2%

Point process generalized linear models (GLMs) have been a major tool for studying coordinated activity across populations of neurons. These models typically quantify how the spiking of a single neuron depends on the past activity of other neurons at multiple time lags, and the resulting neuron-to-neuron interactions are then aggregated to obtain population-coupling effects. However, when neurons within the same population exhibit similar spiking patterns, explicitly modeling individual interactions can be redundant and can unnecessarily increase model complexity. In such cases, population-level formulations may offer a more efficient alternative. For example, biophysical population models often characterize circuit dynamics using the average firing rate across neurons within a population, and recent data-driven approaches have similarly demonstrated the utility of population-level statistics for capturing cross-population interactions. Motivated by this consideration, we reformulate the GLM framework to operate directly at the population level. The resulting model, which we call pop-GLM, provides a computationally efficient method for estimating coupling between populations. In a simulated dataset, we show that pop-GLM achieves greater sensitivity in detecting coupling effects and can account for trial-to-trial variation in stimulus drive, which would otherwise introduce bias. We also note that moving from single-neuron to population-level modeling requires a specific modification of the traditional GLM framework. We then apply pop-GLM to real data and find reduced functional connectivity from primary visual cortex (V1) to a higher visual area during locomotion, a change not detected by single-neuron GLMs.

Author summary: A central goal of systems neuroscience is to understand how multiple populations of neurons across different brain areas interact as a coordinated circuit to produce perception and behavior. 
We formulated and investigated a new method for estimating functional interactions between two populations of spiking neurons, and we show that it can be more sensitive and robust than previous approaches. To illustrate, we discovered decreased interaction between two mouse visual areas during locomotion, a result that previous techniques did not detect. The method should aid investigators in searching for important functional relationships across populations of neurons, with precise time scale resolution.
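The core idea of fitting coupling at the population level can be illustrated with a minimal lagged Poisson regression on simulated population spike counts. This is a generic sketch, not the authors' pop-GLM implementation, and every number in it is invented; it regresses one population's binned counts on the other population's counts at a single lag and fits by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated population spike counts in discrete time bins (hypothetical).
T, lag = 5000, 1
src = rng.poisson(5.0, T)                         # source-population counts
b0_true, b1_true = 0.2, 0.15                      # true coupling parameters
rate = np.exp(b0_true + b1_true * np.roll(src, lag))
tgt = rng.poisson(rate)                           # target-population counts

# Design matrix: intercept + lagged source count (one coupling lag for brevity)
X = np.column_stack([np.ones(T - lag), src[:-lag]])
y = tgt[lag:]

def fit_poisson_glm(X, y, n_iter=50):
    """Newton-Raphson fit of a Poisson GLM with log link."""
    b = np.zeros(X.shape[1])
    b[0] = np.log(y.mean())                       # stable starting point
    for _ in range(n_iter):
        mu = np.exp(X @ b)
        grad = X.T @ (y - mu)                     # score
        hess = X.T @ (X * mu[:, None])            # Fisher information
        b += np.linalg.solve(hess, grad)
    return b

b = fit_poisson_glm(X, y)
print(f"estimated coupling: {b[1]:.3f} (true {b1_true})")
```

A full point-process GLM would add multiple lags, spike-history terms, and stimulus covariates; the preprint's contribution is showing how and why the framework changes when the regressors and response are population aggregates rather than single-neuron spike trains.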

10
Push-and-pull protein dynamics leads to log-normal synaptic sizes and probabilistic multi-spine plasticity

Petkovic, J.; Eggl, M.; Pathirana, D.; Chater, T. E.; Hasenauer, J.; Rizzoli, S.; Tchumatchenko, T.

2026-01-29 neuroscience 10.64898/2026.01.29.702571 medRxiv
Top 0.1%
3.1%

A typical neuron receives thousands of inputs and is able to adapt the strength of its synapses to store new information and meet ongoing computational demands. The synaptic response to plasticity induction is stochastic and spatially structured but is traditionally described by deterministic models representing the "average" dynamics. Growing experimental evidence indicates that not only the stimulation protocol determines the plasticity outcome but that the initial synaptic sizes, their fluctuations, and the spatial competition for the plasticity-relevant proteins play a decisive role. This probabilistic perspective makes it hard to predict the fate of a given synapse and requires a conceptual shift from a single synapse view to a probabilistic multi-spine competitive process where the plasticity needs and the available resources are considered together. Here, we propose a data-driven modeling framework able to predict collective plasticity outcomes along a dendrite based on the initial size, the number, and the spatial distance between simultaneously stimulated synapses. Our data analysis reveals a log-normal distribution of protein numbers for many plasticity-mediating proteins and shows that this log-normal protein allocation constrains and controls the collective plasticity outcome across multiple stimulated and non-stimulated synapses while preserving a global size distribution. Our findings highlight how local stochastic processes and global protein allocation rules give rise to synaptic plasticity outcomes, offering a new framework to understand and predict dendritic computation.

11
A Quality Measure for Repeating Multiple-Unit Spike Patterns

Palm, G.; Paoletti, M.; Ito, J.; Stella, A.; Grün, S.

2026-02-02 neuroscience 10.64898/2026.01.31.702754 medRxiv
Top 0.1%
3.1%

We propose a quality measure for spatio-temporal spike patterns (STPs) in multiple-neuron recordings. In such recordings, repeating STPs or pattern repetitions (PRs) are often found, with many of these generated by chance. To rule those out, statistical tests have been developed to discriminate the unlikely from the more likely PRs. This statistical problem is complicated by the fact that there are several obvious quality criteria for a PR, such as the size (the number of spikes) of the pattern and the number of its occurrences. Here, we propose a canonical way of combining several criteria (which we collect in the so-called signature of the pattern) into a single quality measure, based on the unlikeliness of the pattern. This measure is defined mathematically, and a formula for its computation is derived for stationary spike trains. It can be used to compare PRs. Since spike trains are not stationary in practice, we discuss, for two experimental data sets, how well the stationary formula correlates with the defined quality measure as determined from simulations. The results encourage the use of the stationary formula, or some simpler related formulas, as proxies for the quality, for the comparison of PRs and also for statistical tests that avoid the multiple testing problem incurred by using several quality criteria. Based on our results, we propose a few test statistics, i.e., random variables on the space of multi-unit spike trains with an appropriate null-hypothesis distribution, to evaluate STPs with less computational and sampling effort.

12
A neurocomputational model of observation-based decision making with a focus on trust

Hassanejad Nazir, A.; Hellgren Kotaleski, J.; Liljenström, H.

2026-03-26 neuroscience 10.64898/2026.03.24.713845 medRxiv
Top 0.1%
2.8%

As social beings, humans make decisions partly based on social interaction. Observing the behavior of others can lead to learning from and about them, potentially increasing trust and prompting trust-based behavioral changes. Observation-based decision making involves different neural structures. The orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) are known as neural structures mainly involved in processing emotional and cognitive decision values, respectively, while the anterior cingulate cortex (ACC) plays a pivotal role as a social hub, integrating the afferent expectancy signals from OFC and LPFC. This paper presents a neurocomputational model of the interplay between observational learning and trust, as well as their role in individual decision-making. Our model elucidates and predicts the emotional and rational behavioral changes of an individual influenced by observing the action-outcome association of an alleged expert. We have modeled the neurodynamics of three cortical structures (OFC, LPFC, and ACC) and their interactions, where the neural oscillatory properties, modeled with Dynamic Bayesian Probability, represent the observer's attitude towards the expert and the decision options. As an example of an everyday behavioral situation related to climate change, we use the choice of transportation between home and work. The EEG-like simulation outputs from our model represent the presumed brain activity of an individual making such a choice, assuming the decision-maker is exposed to social information.

13
Reconciling contradictory models of subthalamic nucleus contributions to basal ganglia beta oscillations

Tse, K. N.; Ermentrout, G. B.; Rubin, J.

2026-01-26 neuroscience 10.64898/2026.01.26.701663 medRxiv
Top 0.1%
2.7%

Recent computational studies of Parkinson's disease have yielded contradictory findings regarding the role of the subthalamic nucleus (STN) in pathological beta oscillations, with some models implicating STN as essential for beta generation and others suggesting that STN suppresses oscillations. This work addresses these discrepancies by systematically investigating how the specific features of the integrate-and-fire neurons used in these models influence simulated basal ganglia network dynamics. Using both rate models and spiking network simulations incorporating coupled subthalamopallidal and pallidostriatal circuits, we demonstrate that the choice between leaky integrate-and-fire (LIF) and quadratic integrate-and-fire (QIF) models to represent STN neurons fundamentally impacts the phase relationship between STN and external globus pallidus prototypical (Proto) neuron populations. QIF STN neurons establish in-phase coupling with Proto neurons, which enhances beta oscillation amplitude, while LIF STN neurons develop anti-phase relationships, which suppresses beta power. Through intervention experiments and parameter sweeps across physiologically relevant firing rates, we show that these phase-related effects persist robustly across network conditions, and we mathematically establish conditions under which these results are guaranteed to hold. Our findings reveal that the fundamental mathematical structure underlying spike generation, rather than other biophysical details, determines whether the subthalamopallidal loop acts as a beta amplifier or suppressor. This mechanistic insight reconciles contradictory findings in the literature, demonstrates that seemingly minor modeling choices can have profound consequences for understanding disease mechanisms and therapeutic targets, and offers predictions for determining which model framework reflects the biological reality. 
Author summary: Substantial work has explored the mechanisms underlying enhanced beta oscillations in the basal ganglia, motivated by their potential relevance to parkinsonian conditions and associated treatments. Often, inferences about these mechanisms are based on simulations and reasoning that focus on features of network connectivity. We show that, in fact, the specific dynamical properties of the neurons in these circuits can strongly influence their emergent dynamics, with completely opposing effects arising in a given network structure depending on which neuron model is used, and we explain the factors underlying this divergence. Based on these factors, the determination of a small set of neuron properties in future biological experiments will lead to predictions about the mechanisms that can generate beta oscillations in the parkinsonian basal ganglia.

14
The variability of reflex amplitude estimates in motor unit pools depends on the phenotype distribution and discharge statistics

Schmid, L.; Klotz, T.; Röhrle, O.; Thompson, C. K.; Negro, F.; Yavuz, U. S.

2026-02-12 neuroscience 10.64898/2026.02.11.705250 medRxiv
Top 0.1%
2.7%

Motor unit (MU) activity during electrically or mechanically evoked reflexes is used to examine the relationship between neural inputs and MU properties. However, variations in single-MU reflex amplitudes are not fully understood and limit their reliability in determining the input-output relation of motor neurons (MNs). Using experiments and computer simulations, we investigated (i) whether MN discharge statistics and muscle activation explain the variability of reflex amplitude estimates and (ii) whether these variations are reflected differently across distinct reflex amplitude estimation methods. We analyzed MU spike trains extracted from isometric contractions of the tibialis anterior muscle at 10 % and 20 % MVC (maximum voluntary contraction). When reflex amplitudes were estimated from the peristimulus frequencygram (PSF) at 10 % MVC, the linear regression between discharge rate (DR) and reflex amplitude was always positive, with p < 0.05 in 3 out of 6 subjects; however, the linear correlation was inconsistent at 20 % MVC. We further observed that inter-subject variability was associated with the coefficient of variation of the interspike intervals. Furthermore, the linear correlation between DR and peristimulus time histogram (PSTH) based reflex amplitudes was inconsistent for both contraction forces. To obtain further insights into the influence of MN properties, we simulated reflexes in a heterogeneous MN population using electrical circuit models and varied MN inputs. The simulations indicate that, besides mean input current and membrane noise, MN properties also contribute to the variability of reflex amplitude estimates. The MN heterogeneity is well captured by PSF-based reflex estimates but not by PSTH-based ones. These results show that variations in amplitude estimates of individual MU reflexes are due to complex interactions between intrinsic and extrinsic factors.
As PSF-based reflex amplitude estimates reflect the MN size distribution, tracking PSF-based reflex amplitudes at fixed MVC levels across individual subjects could serve as a marker for investigating spinal adaptations under (patho)physiological conditions. Author summary: Motor neurons are specialized nerve cells that control human movement. Each motor neuron activates a specific set of muscle fibers, and the functional unit consisting of a motor neuron and its muscle fibers is called a motor unit. We can observe the activity of motor neurons in humans by decomposing the electrical activity of muscles (the electromyogram) into contributions from individual motor units. Reflex responses of motor units are often used to study the input-output relation of motor neurons in humans. We used a combination of experiments and computer simulations to study the factors that influence the reflex amplitude of motor units during an excitatory reflex. We found that the reflex amplitude is non-linearly influenced by a number of intrinsic and extrinsic factors, e.g., motor neuron size, but also muscle force. Additionally, we found that these factors have different effects on the results of the two common methods used to calculate the reflex amplitude. These results provide guidance on choosing a suitable evaluation method and on interpreting reflex experiments.

15
Macaque retina simulator

Vanni, S.; Vedele, F.; Hokkanen, H.

2026-03-11 neuroscience 10.64898/2026.03.09.710551 medRxiv
Top 0.1%
2.7%

The primate retina dissects visual scenes into multiple retinocortical streams. The most numerous retinal ganglion cell (GC) types, midget and parasol cells, are further divided into ON and OFF subtypes. These four GC populations have anatomical and physiological asymmetries, which are reflected in the spike trains received by downstream circuits. Computational models of the visual cortex, however, rarely take GC signal processing into account. We have built a macaque retina simulator with the aim of providing biologically plausible spike trains for downstream visual cortex simulations. The simulator is based on realistic sampling density and receptive field size as a function of eccentricity, as well as on two distinct spatial and three temporal receptive field models. Starting from data from the literature and earlier receptive field measurements, we synthesize distributions for receptive field parameters, from which the synthetic units are sampled. The models are restricted to monocular and monochromatic stimuli and follow data from the temporal hemiretina, which is more isotropic. We show that the model patches conform to anatomical data not used in the reconstruction process and characterize the responses with respect to spatial and temporal contrast sensitivity functions. The simulator takes a stimulus video as input and provides biologically plausible spike trains for the distinct unit types. This supports development of thalamocortical primate model systems of vision. In addition, it can provide a reference for more biophysical retina models. The independent parameters are housed in text files supporting reparameterization for particular macaque data or other primate species. Author summary: The visual environment provides a rich source of information, and the structure and function of the visual system have been studied for decades in many species, including humans.
The most complex data in mammalian species are processed in the cerebral cortex, but to date we are still missing a functioning model of cortical computations. While the earlier anatomical and physiological data describe many details of the visual system, to understand the functional logic we need to numerically simulate the complex interactions within this system. To pave the way for simulating visual cortex computations, we have developed a functioning model of the macaque retina. The neuroinformatics work comprises a review and re-digitization of existing retina data from the literature, as well as statistics from earlier macaque receptive field data. Finally, we provide software that brings the collected neuroinformatics to life and allows researchers to convert visual input into biologically feasible spike trains for simulation experiments of the visual cortex.

16
Seizure recruitment properties are dependent upon dynamotype: A modeling study

Karosas, D. M.; Saggio, M.; Stacey, W. C.

2026-02-06 neuroscience 10.64898/2026.02.04.703690 medRxiv
Top 0.1%
2.5%

Seizure propagation - how epileptogenic brain regions recruit less excitable regions - is poorly understood. Previous studies have used dynamical modeling to study seizure propagation and to create patient-specific whole-brain models of seizure spread. However, these studies focused on seizures of a single dynamotype (onset and offset bifurcation pair). Here, we implement a novel coupling method to investigate seizure propagation in a diverse array of dynamotypes. We utilize the Multiclass Epileptor, a recently proposed model that captures a wide range of seizure dynamotypes in a cortical mass ("node"). We consider two nodes: the seizure onset zone (node 1), which bursts autonomously, and the potential propagation zone (node 2), which is not independently epileptogenic but can be recruited by node 1. We examine the impact of intrinsic and coupling factors on the likelihood and speed of recruitment, with particular attention to the onset bifurcation of node 1. We also measure the range of onset behaviors observed in node 2 with respect to the onset behavior of node 1. The model predicted that seizures that display baseline shifts at onset are less likely to spread, and spread more slowly, compared to seizures that do not exhibit baseline shifts at onset. Seizures that present with amplitude scaling at onset were unlikely to propagate. Further, the model predicted the potential for unusual combinations of onset dynamics, such as a baseline shift in node 2 but not node 1. We confirmed the possibility for several of these unusual recruitment behaviors in humans using intracranial electroencephalography data. The results of the study provide a theoretical framework for seizure propagation, establishing a basis for innovations in the characterization of patients' seizure networks and identification of the seizure onset zone. Author summary: In this work, we examined how a seizure spreads from one part of the brain to another using a computational model.
We modeled two brain regions using the Multiclass Epileptor, which reproduces a range of brain activity patterns associated with seizures. In the model, the first brain node was able to recruit the second brain node into a seizure. The model predicted that the likelihood and speed of seizure spread differ depending on the pattern of brain activity observed at the start of the seizure. We also found that the pattern of brain activity at seizure onset is not necessarily the same pattern seen when the seizure spreads. We confirmed this possibility of mismatched patterns in recordings from the human brain. The findings of the study improve our understanding of seizure spread, which lays the groundwork for development of tools to quantify seizure spread and may inform future work in patient-specific brain modeling.

17
The transfer function as a tool to reduce morphological models into point-neuron models

Daou, M.; Jovanic, T.; Destexhe, A.

2026-03-24 neuroscience 10.64898/2026.03.20.713213 medRxiv
Top 0.2%
2.2%

Building a simple model that precisely and functionally characterizes a neuron is a challenging and important step toward selecting a concise and computationally efficient model. However, this type of work has only been done for subthreshold properties of neurons. Here, we take a different perspective and suggest a method to obtain point-neuron models from morphologically-detailed models with dendrites. To do this, we focus on the functional characterization of the neuron response under in vivo conditions, and compute the transfer function of the detailed model. The parameters of this transfer function, in terms of mean voltage, voltage standard deviation and correlation time, can be used to compute the "best" point-neuron model that generates a transfer function very close to that of the morphologically-detailed model. We illustrate this approach for two very different neuronal morphologies, one from Drosophila larvae and one from mammals. In conclusion, this approach provides a tool to generate point-neuron models from detailed models, based on a functional characterization of the neuron response. Significance statement: This study provides a new computational method to reduce morphological models into point-neuron models. To do so, we calculate the transfer function parameters, i.e., the voltage standard deviation, the mean voltage, and the correlation time, of the morphological model and fit a point-neuron model to these data. Here, we successfully apply this approach to two very different neuron morphologies, a Drosophila neuron and a rat motoneuron.

18
Origin and functional impact of early nonlinearities in primate retina

Raval, V.; Oaks-Leaf, R.; Chen, Q.; Rieke, F.

2026-03-23 neuroscience 10.64898/2026.03.19.713068 medRxiv
Top 0.2%
2.1%

Receptive fields provide a concise description of the stimulus selectivity of visual neurons. But this stimulus selectivity is neither static nor linear, and these nonlinear effects are not well captured by standard linear or pseudo-linear receptive field models. At the same time, receptive field models incorporating nonlinear effects are largely empirical, and are not easily interpreted in terms of underlying cellular and synaptic mechanisms. Here we show that two nonlinear mechanisms in the primate outer retina shape neural responses, and that these mechanisms contribute significantly to responses to natural stimuli and to the retinal output signals. Incorporating these outer retinal nonlinearities into models for visual function will improve our ability to identify the mechanistic origin of specific features of downstream visual responses.

19
Memory consolidation and representational drift

Alevi, D.; Lundt, F.; Ciceri, S.; Heiney, K.; Sprekeler, H.

2026-03-12 neuroscience 10.64898/2026.03.09.710554 medRxiv
Top 0.2%
1.9%

Memory consolidation is the process by which temporary, malleable memories are transformed into more stable, longer-lasting forms. On a coarse anatomical scale, consolidation redistributes memories in the brain, but it remains poorly understood how these changes manifest themselves on the finer, cellular scale of neuronal engrams and how they relate to the cognitive level. In this study, we developed a phenomenological model of engram dynamics under systems consolidation. The model describes consolidation as a brain-wide phenomenon, where memories deterministically follow a trajectory through a space of patterns distributed among brain regions. It captures a broad range of features of memory consolidation, including selective consolidation, semantization, and power-law forgetting. In the model, consolidation is accompanied by population-level changes in neuronal representations that resemble the widely observed phenomenon of representational drift. When only a subset of neurons is observed, the deterministic dynamics of the model can appear stochastic, and a readout of task features deteriorates over time even when a stable readout exists for the full system. Our model offers a dynamical systems perspective on memory consolidation as a distributed process, moving beyond the classic region-centered view, and provides a functional interpretation of drift as a means of redistributing engrams for improved memory retention.

20
Synchronization properties in C. elegans: Relating behavioral circuits to structural and functional neuronal connectivity

Sar, G. K.; Patton, A.; Towlson, E.; Davidsen, J.

2026-03-25 neuroscience 10.64898/2026.03.23.713580 medRxiv
Top 0.2%
1.9%

A central question in neuroscience is how neural processing generates or encodes behavior. Caenorhabditis elegans is well suited to addressing this question, given its compact nervous system and near-complete structural connectome. Despite this, findings from previous studies remain inconclusive. While some have shown that the connectome can robustly encode specific behaviors such as locomotion, others report that functional connectivity can be reconfigured across behaviors. We aim to understand the relationship between structural connectivity, functional connectivity and biological behavior in silico by using an experimentally motivated computational model leveraging the structural connectome. Stimulation of specific neurons in the model induces oscillatory neural responses, enabling us to infer neuronal functional connectivity. Functional connectivity is found to be stronger among some neurons, allowing us to identify functional communities. We find that electrical synapses play a critical role in determining functional communities, and the resulting mesoscale functional architecture is predominantly assortative with respect to gap junctions. Furthermore, comparison with behavioral circuits shows that locomotion circuits are largely segregated into distinct functional communities while other circuits are more distributed across multiple functional communities. We also observe that stimulation of neurons belonging to these distributed circuits elicits a more synchronized neuronal response compared to stimulation of neurons within the more segregated circuits. This is consistent with the presence of behavioral patterns that originate in one circuit and terminate in another (e.g., chemosensation leading to locomotion), such that stimulation of one circuit can activate the other and eventually result in a synchronized response.
We also find a large repertoire of chimera-like synchronization patterns upon stimulation of certain behavioral circuits (chemosensation, mechanosensation), indicating high dynamical flexibility. Overall, our results demonstrate that while certain behaviors are governed by functionally segregated circuits, others emerge from the synchronization of multiple functional communities, which are themselves shaped by the underlying structural connectivity. Author summary: Animals constantly transform sensory inputs into actions, but it is still unclear how this mapping from neural activity to behavior is implemented in a real nervous system. Caenorhabditis elegans offers a unique testbed for this question because its entire wiring diagram is nearly completely mapped. Yet previous work has reached mixed conclusions about how well this anatomical circuit diagram predicts actual patterns of activity and behavior. Here, we use a biologically inspired computational model of the C. elegans nervous system to bridge this gap between structure, function, and behavior. By virtually stimulating individual neurons and observing the resulting network-wide oscillations, we infer how strongly different pairs and groups of neurons interact in functional terms. We then use network analysis tools to identify groups of neurons that tend to co-activate, and relate these functional communities to known behavioral circuits for locomotion and sensory processing. We find that gap junctions play a key role in shaping functional communities, and that locomotion-related neurons are more functionally segregated than neurons involved in other behaviors, which are more functionally distributed. Our results suggest that some behaviors rely on specialized, functionally isolated circuits, whereas others emerge from the coordinated activity of multiple functional communities.